Section: Partnerships and Cooperations

International Initiatives

Inria Associate Teams

  • Inria principal investigators: Mohammad Ghavamzadeh and Rémi Munos

    • Institution: McGill University (Canada)

    • Laboratory: Reasoning and Learning Lab

    • Principal investigators:

      • Prof. Joelle Pineau, Collaborator

      • Prof. Doina Precup, Collaborator

      • Amir-massoud Farahmand, Collaborator

  • Duration: January 2013 - January 2015

Inria International Partners

Declared Inria International Partners
  • Ronald Ortner and Peter Auer: Montanuniversität Leoben (Austria).

  • Reinforcement learning (RL) deals with the problem of interacting with an unknown stochastic environment that occasionally provides rewards, with the goal of maximizing the cumulative reward. The problem is well understood when the unknown environment is a finite-state Markov process. This collaboration is centered on reducing the general RL problem to this case.

    In particular, the following problems are considered: representation learning, learning in continuous-state environments, bandit problems with dependent arms, and pure exploration in bandit problems. We have successfully collaborated on each of these problems in the past and plan to sustain this collaboration, possibly extending its scope; an illustrative sketch of the finite-state case follows this list.
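As a brief illustration of why the finite-state case is considered well understood: when the Markov decision process is small and fully specified, optimal state values and a greedy policy can be computed by simple dynamic programming (value iteration). The Python sketch below uses toy transition probabilities, rewards, and discount factor invented for the example; it is not code or data from the collaboration.

    # Value iteration on a toy, fully known finite-state MDP (illustrative only).
    import numpy as np

    gamma = 0.9  # discount factor (toy value)

    # P[a, s, s'] = probability of moving from state s to s' under action a (toy numbers)
    P = np.array([
        [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],  # action 0
        [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]],  # action 1
    ])
    # R[a, s] = expected immediate reward for taking action a in state s (toy numbers)
    R = np.array([
        [0.0, 0.0, 1.0],
        [0.1, 0.1, 0.0],
    ])

    V = np.zeros(P.shape[1])
    for _ in range(1000):
        # Bellman optimality update: Q(a, s) = R(a, s) + gamma * sum_{s'} P(a, s, s') V(s')
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new

    print("optimal state values:", V)
    print("greedy policy (action per state):", Q.argmax(axis=0))

In the RL setting studied in the collaboration, the transition and reward models are unknown and must be learned from interaction, which is where the representation-learning and exploration questions listed above come in.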

Informal International Partners
  • eHarmony Research, California.

    • Václav Petříček, Collaborator

      Michal Valko has started to collaborate with eHarmony on sequential decision making for online dating and offline evaluation.

  • University of Alberta, Edmonton, Alberta, Canada.

    • Csaba Szepesvári and Bernardo Avila Pires, Collaborators

      This year we collaborated on risk bounds for cost-sensitive multiclass classification, which resulted in a paper accepted at ICML [8].

  • Technion - Israel Institute of Technology, Haifa, Israel.

    • Odalric-Ambrym Maillard, Collaborator

      Daniil Ryabko has worked with Odalric-Ambrym Maillard on representation learning for reinforcement learning problems, which led to a paper at AISTATS [21].

  • School of Computer Science, Carnegie Mellon University, USA.

    • Prof. Emma Brunskill, Collaborator

    • Mohammad Gheshlaghi Azar, PhD, Collaborator

      A. Lazaric started a fruitful collaboration on transfer in multi-armed bandit and reinforcement learning problems, which led to two publications at ECML and NIPS. We are currently working on extensions of these algorithms and on novel regret-minimisation algorithms in non-i.i.d. settings.

  • Technicolor Research, Palo Alto.

    • Branislav Kveton, Collaborator

      Michal Valko and Rémi Munos worked with Branislav Kveton on Spectral Bandits, aimed at entertainment content recommendation. Michal continued the ongoing research on online semi-supervised learning and this year delivered an algorithm for the challenging single-picture-per-person setting [19]. Victor Gabillon spent six months at Technicolor as an intern working on sequential learning with submodularity, which resulted in one accepted paper at NIPS and two submissions to ICML.